12 research outputs found

    User-centered design and evaluation of interactive segmentation methods for medical images

    Segmentation of medical images is a challenging task that aims to identify a particular structure in the image. Existing methods involve the user at different levels, from fully manual to fully automated. Among them, interactive segmentation methods assist the user during the task to reduce variability in the results and to allow occasional correction of segmentation failures; they thus offer a compromise between segmentation efficiency and accuracy. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcome of a segmentation task, the impact of such factors has received little attention, the literature assessing segmentation processes mainly in terms of computational performance. Yet involving user performance in the analysis is more representative of a realistic scenario. Our goal is to explore user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method based on a new user interaction mechanism that provides hints as to where to concentrate the computations. This significantly improves computational efficiency without sacrificing segmentation quality. The benefits of such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution: an automated method based on a multi-scale strategy to (i) reduce the user's workload and (ii) reduce the computation time by up to a factor of ten, allowing real-time segmentation feedback. Third, we investigated the effects of such computational improvements on user performance. We report an experiment that manipulates the delay induced by the computation time during an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution developed in compliance with user performance requirements. We validated our approach through multiple user studies, providing a step forward in understanding user behaviour during interactive image segmentation.

    Hessian-based Similarity Metric for Multimodal Medical Image Registration

    One of the fundamental elements of both traditional and certain deep learning medical image registration algorithms is measuring the similarity (or dissimilarity) between two images. In this work, we propose an analytical solution for measuring similarity between two different medical image modalities based on the Hessian of their intensities. First, assuming a functional dependence between the intensities of two perfectly corresponding patches, we investigate how their Hessians relate to each other. Second, we derive a closed-form expression to quantify the deviation from this relationship, given arbitrary pairs of image patches. We propose a geometrical interpretation of the new similarity metric and an efficient implementation for registration. We demonstrate the robustness of the metric to intensity nonuniformities using synthetic bias fields. By integrating the new metric in an affine registration framework, we evaluate its performance for MRI and ultrasound registration in the context of image-guided neurosurgery, using target registration error and computation time.
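The abstract does not give the closed-form expression itself, so the following is only an illustrative sketch of the underlying idea: under a functional dependence a = f(b), the chain rule relates the Hessians as H_a = f'(b) H_b + f''(b) ∇b ∇bᵀ. Treating f' and f'' as constants over a small patch (an assumption made purely for illustration, not the paper's actual derivation), the deviation from this relationship can be scored as a least-squares residual:

```python
import numpy as np

def hessian_components(img):
    """Per-pixel gradient and Hessian entries via finite differences."""
    gy, gx = np.gradient(img)
    gxy, gxx = np.gradient(gx)
    gyy, gyx = np.gradient(gy)
    return gxx, 0.5 * (gxy + gyx), gyy, gx, gy

def hessian_deviation(patch_a, patch_b):
    """Toy dissimilarity: relative residual of fitting the Hessian of
    patch_a as a linear combination of the Hessian of patch_b and the
    outer product of patch_b's gradient (chain-rule model for a = f(b))."""
    axx, axy, ayy, _, _ = hessian_components(patch_a)
    bxx, bxy, byy, bx, by = hessian_components(patch_b)
    target = np.concatenate([axx.ravel(), axy.ravel(), ayy.ravel()])
    basis = np.stack([
        # f'(b) * H_b term
        np.concatenate([bxx.ravel(), bxy.ravel(), byy.ravel()]),
        # f''(b) * (grad b)(grad b)^T term
        np.concatenate([(bx * bx).ravel(), (bx * by).ravel(), (by * by).ravel()]),
    ], axis=1)
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    resid = target - basis @ coef
    return np.linalg.norm(resid) / (np.linalg.norm(target) + 1e-12)
```

For an affine dependence (f'' = 0, f' constant) the residual vanishes exactly, while structurally unrelated patches leave a large unexplained fraction, which is the behaviour a similarity metric of this kind relies on.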

    Latency management in scribble-based interactive segmentation of medical images

    Objective: During an interactive image segmentation task, the outcome is strongly influenced by human factors. In particular, a reduction in computation time does not guarantee an improvement in the overall segmentation time. This paper characterizes user efficiency during scribble-based interactive segmentation as a function of computation time. Methods: We report a controlled experiment in which users experienced eight levels of simulated latency (ranging from 100 to 2000 ms) with two techniques for refreshing visual feedback: automatic, where the segmentation was recomputed and displayed continuously during label drawing, and user-initiated, where it was recomputed and displayed only when the user pressed a dedicated button. Results: For short latencies, the user's attention is focused on the automatic visual feedback, slowing down their labeling performance. This effect is attenuated as the latency grows, and the two refresh techniques yield similar user performance at the largest latencies. Moreover, participants spent on average 72.67% ± 2.42% (automatic refresh) and 96.23% ± 0.06% (user-initiated refresh) of the overall segmentation time interpreting the results. Conclusion: Latency is perceived differently according to the refresh method used during the segmentation task; it is therefore possible to reduce its impact on user performance. Significance: This is the first study to investigate the effects of latency in an interactive segmentation task. The analysis and recommendations provided in this paper help in understanding the cognitive mechanisms at play in interactive image segmentation.

    The state-of-the-art in ultrasound-guided spine interventions.

    During the last two decades, intra-operative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of pre-operative computed tomography or magnetic resonance images with iUS images is a key element in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors, including the lack of a standard methodology for assessing the accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of the state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps involved in the surgical workflow: pre-processing, registration initialization, estimation of the required patient-to-image transformation, and visualization. We provide a detailed analysis of the measures of accuracy, robustness, reliability, and usability that need to be met during the evaluation of a spinal IGS framework. Although this review is focused on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.

    A Generalized Graph Reduction Framework for Interactive Segmentation of Large Images

    The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into “layers” (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground/background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.
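The layer-partition step described above can be sketched as follows; the number of layers, the toy circular contour, and the cutoff layer are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def fibonacci_layers(dist_to_contour, n_layers=6):
    """Partition pixels into layers whose thickness (in pixels) grows
    along the Fibonacci sequence 1, 1, 2, 3, 5, 8, ... with distance
    from the user-drawn contour."""
    thickness = [1.0, 1.0]
    while len(thickness) < n_layers:
        thickness.append(thickness[-1] + thickness[-2])
    outer_radius = np.cumsum(thickness)        # cumulative layer boundaries
    return np.digitize(dist_to_contour, outer_radius)  # 0 = closest layer

# Toy contour: a circle of radius 10 on a 64x64 image; the distance of a
# pixel to that contour is |r - 10|.
yy, xx = np.mgrid[:64, :64]
dist = np.abs(np.hypot(yy - 32.0, xx - 32.0) - 10.0)
layers = fibonacci_layers(dist)
keep = layers < 4    # vertices kept in the reduced graph; the rest are eliminated
```

In the actual method, foreground/background seed labels are generated from a selected layer and only the kept vertices are passed to RW or GC, which is what shrinks the feedback loop while preserving full resolution near the boundary.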

    Open-source software for ultrasound-based guidance in spinal fusion surgery.

    Spinal instrumentation and surgical manipulations may cause loss of navigation accuracy, requiring an efficient re-alignment of the patient anatomy with pre-operative images during surgery. While intra-operative ultrasound (iUS) guidance has shown clear potential to reduce surgery time compared with clinical computed tomography (CT) guidance, rapid registration aiming to correct for patient misalignment has not been addressed. In this article, we present an open-source platform for pedicle screw navigation using iUS imaging. The alignment method is based on rigid registration of CT to iUS vertebral images and has been designed for fast and fully automatic patient re-alignment in the operating room. Two steps are involved: first, we use the iUS probe's trajectory to achieve an initial coarse registration; then, the registration transform is refined by simultaneously optimizing gradient orientation alignment and the mean of iUS intensities passing through the CT-defined posterior surface of the vertebra. We evaluated our approach on a lumbosacral section of a porcine cadaver with seven vertebral levels. We achieved a median target registration error of 1.47 mm (100% success rate, defined by a target registration error <2 mm) when applying the probe-trajectory initial alignment. The approach exhibited high robustness to partial visibility of the vertebra, with success rates of 89.86% and 88.57% when missing either the left or right part of the vertebra, and robustness to initial misalignments, with a success rate of 83.14% for random starts within ±20° rotation and ±20 mm translation. Our graphics processing unit implementation achieves a registration time under 8 s, making the approach suitable for clinical application.
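The abstract names gradient orientation alignment as one of the two refinement terms without spelling out its form; a common squared-cosine variant from the multimodal registration literature (an assumption here, not necessarily the paper's exact formulation) can be sketched as:

```python
import numpy as np

def gradient_orientation_alignment(fixed, moving, eps=1e-6):
    """Mean squared cosine of the angle between the image gradients of
    two images; 1.0 means gradients are everywhere (anti)parallel.
    Squaring makes the measure invariant to contrast direction, which
    matters across modalities such as CT and iUS."""
    fy, fx = np.gradient(fixed)
    my, mx = np.gradient(moving)
    dot = fx * mx + fy * my
    norm = np.hypot(fx, fy) * np.hypot(mx, my)
    mask = norm > eps                 # skip flat, gradient-free regions
    cos2 = (dot[mask] / norm[mask]) ** 2
    return float(cos2.mean())
```

An optimizer would evaluate this measure over candidate rigid transforms of the CT and keep the transform that maximizes it; contrast-direction invariance is the reason squared (rather than signed) cosines are used across modalities.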

    Evaluation of an Ultrasound-Based Navigation System for Spine Neurosurgery: A Porcine Cadaver Study


    Ultrasound-based navigated pedicle screw insertion without intraoperative radiation: feasibility study on porcine cadavers

    BACKGROUND Navigation systems for spinal fusion surgery rely on intraoperative computed tomography (CT) or fluoroscopy imaging. Both expose the patient, surgeons, and operating room staff to significant amounts of radiation. Alternative methods involving intraoperative ultrasound (iUS) imaging have recently shown promise for image-to-patient registration. Yet the feasibility and safety of iUS navigation in spinal fusion have not been demonstrated. PURPOSE To evaluate the accuracy of pedicle screw insertion in lumbar and thoracolumbar spinal fusion using a fully automated iUS navigation system. STUDY DESIGN Prospective porcine cadaver study. METHODS Five porcine cadavers were used to instrument the lumbar and thoracolumbar spine using posterior open surgery. During the procedure, iUS images were acquired and used to establish automatic registration between the anatomy and preoperative CT images. Navigation was performed on the preoperative CT using tracked instruments. The accuracy of the system was measured as the distance of manually collected points to the preoperative CT vertebral surface and compared against fiducial-based registration. A postoperative CT was acquired, and screw placements were manually verified. We report breach rates as well as axial and sagittal screw deviations. RESULTS A total of 56 screws were inserted (5.50 mm diameter, n=50; 6.50 mm diameter, n=6). Fifty-two screws were inserted safely without breach. Four screws (7.14%) presented a medial breach with an average deviation of 1.35±0.37 mm (all <2 mm). Two breaches were caused by 6.50 mm diameter screws and two by 5.50 mm screws. For vertebrae instrumented with 5.50 mm screws, the average axial pedicle diameter was 9.29 mm, leaving a 1.89 mm margin in the left and right pedicle. For vertebrae instrumented with 6.50 mm screws, the average axial pedicle diameter was 8.99 mm, leaving a 1.24 mm margin. The average distance to the vertebral surface was 0.96 mm using iUS registration and 0.97 mm using fiducial-based registration. CONCLUSIONS We successfully implanted all pedicle screws in the thoracolumbar spine using the ultrasound-based navigation system. All recorded breaches were minor (<2 mm), and the breach rate (7.14%) was comparable to the existing literature. More investigation is needed to evaluate consistency, reproducibility, and performance in a surgical context. CLINICAL SIGNIFICANCE Intraoperative US-based navigation is feasible and practical for pedicle screw insertion in a porcine model. It might serve as a low-cost, radiation-free alternative to intraoperative CT and fluoroscopy in the future.
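The reported margins follow directly from centering a screw in the pedicle, i.e. margin = (pedicle axial diameter − screw diameter) / 2; a quick check of the abstract's figures:

```python
# Per-side margin when a screw is centered in the pedicle:
#   margin = (pedicle axial diameter - screw diameter) / 2
cases = {5.50: 9.29, 6.50: 8.99}   # screw diameter -> mean pedicle diameter (mm)
margins = {screw: (pedicle - screw) / 2 for screw, pedicle in cases.items()}
# roughly 1.89 mm for the 5.50 mm screws and 1.24 mm for the 6.50 mm screws
```

The thinner margin for the 6.50 mm screws is consistent with two of the four breaches coming from only six screws of that diameter.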

    Toward real-time rigid registration of intra-operative ultrasound with preoperative CT images for lumbar spinal fusion surgery

    Purpose Accurate and effective registration of the vertebrae is crucial for spine surgical navigation procedures. Patient movement, surgical instrumentation, or inadvertent contact with the tracked reference during the intervention may invalidate the registration, requiring a rapid correction of the misalignment. In this paper, we present a framework to rigidly align preoperative computed tomography (CT) with intra-operative ultrasound (iUS) images of a single vertebra. Methods We acquire iUS images with a single caudo-cranial axial sweep, whose scan trajectory is used to initialize the registration transform. To refine the transform, locations of the posterior vertebra surface are first extracted, then used to compute a CT-to-iUS intensity gradient-based alignment. The approach was validated on a lumbosacral section of a porcine cadaver. Results We achieved an overall median accuracy of 1.48 mm (success rate of 84.42%) in ~11 s of computation time, satisfying the clinically accepted accuracy threshold of 2 mm. Conclusion Our approach, using intra-operative ultrasound to register patient vertebral anatomy to preoperative images, meets the clinical needs in terms of accuracy and computation time, facilitating its integration into the surgical workflow.
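Target registration error (TRE) and the success-rate criterion used here and in the related studies above can be sketched as follows; the per-trial aggregation by median is an assumption for illustration:

```python
import numpy as np

def tre(targets_true, targets_registered):
    """Target registration error: Euclidean distance per anatomical
    target (mm) after applying the registration transform."""
    diff = np.asarray(targets_true) - np.asarray(targets_registered)
    return np.linalg.norm(diff, axis=1)

def summarize_trials(per_trial_tre, threshold_mm=2.0):
    """Median TRE over trials and the success rate, i.e. the fraction
    of trials whose median TRE falls below the clinical threshold."""
    medians = np.array([np.median(t) for t in per_trial_tre])
    return float(np.median(medians)), float(np.mean(medians < threshold_mm))
```

With the 2 mm clinical threshold used throughout these abstracts, a reported figure such as "median accuracy of 1.48 mm (success rate of 84.42%)" corresponds to the first and second return values of `summarize_trials` over many randomized registration starts.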